Running Primal-Dual Gradient Method for Time-Varying Nonconvex Problems
Authors
Abstract
This paper focuses on a time-varying constrained nonconvex optimization problem and considers the synthesis and analysis of online regularized primal-dual gradient methods to track a Karush--Kuhn--Tucker (KKT) trajectory. The proposed method is implemented in a running fashion, in the sense that the underlying problem changes during the execution of the algorithms. In order to study its performance, we first derive its continuous-time limit as a system of differential inclusions. We then derive sufficient conditions for tracking a KKT trajectory, and also asymptotic bounds on the tracking error (as a function of the time-variability of the trajectory). Further, we provide a set of sufficient conditions under which the KKT trajectories do not bifurcate or merge, and investigate the optimal choice of the parameters of the algorithm. Illustrative numerical results are provided.
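To make the "running" idea concrete, here is a minimal sketch of an online regularized primal-dual gradient iteration on a toy time-varying problem. The problem instance (a drifting scalar quadratic with one inequality constraint) and all parameter values are illustrative assumptions, not the paper's setting; the regularization term `-(eps/2)*lam**2` in the Lagrangian is what keeps the dual update well behaved.

```python
import numpy as np

def running_primal_dual(T=200, alpha=0.05, eps=0.1):
    """Sketch of an online regularized primal-dual gradient method
    (assumed toy instance, not the paper's exact scheme).

    Tracks min_x (x - c_t)^2  s.t.  x <= b, where the target c_t drifts
    over time. Updates follow the regularized Lagrangian
        L_t(x, lam) = f_t(x) + lam * g_t(x) - (eps / 2) * lam**2.
    """
    x, lam = 0.0, 0.0
    traj = []
    for t in range(T):
        c = np.sin(0.02 * t)              # time-varying minimizer of f_t
        b = 0.5                           # constraint g_t(x) = x - b <= 0
        grad_x = 2.0 * (x - c) + lam      # d/dx [f_t(x) + lam * (x - b)]
        x -= alpha * grad_x               # primal gradient descent step
        g = x - b                         # current constraint value
        lam = max(0.0, lam + alpha * (g - eps * lam))  # regularized dual ascent
        traj.append(x)
    return np.array(traj)
```

Because the problem changes at every iteration while the algorithm runs, the iterate never solves any single instance exactly; instead it tracks the moving KKT point with an error driven by how fast `c_t` drifts, which is the tracking phenomenon the abstract bounds.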
Similar Resources
Adaptive Primal-dual Hybrid Gradient Methods for Saddle-point Problems
The Primal-Dual hybrid gradient (PDHG) method is a powerful optimization scheme that breaks complex problems into simple sub-steps. Unfortunately, PDHG methods require the user to choose stepsize parameters, and the speed of convergence is highly sensitive to this choice. We introduce new adaptive PDHG schemes that automatically tune the stepsize parameters for fast convergence without user inp...
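For reference, the baseline that such adaptive schemes improve on is vanilla PDHG with fixed stepsizes. The sketch below applies fixed-stepsize PDHG to 1-D total-variation denoising; the problem instance, stepsizes, and helper functions are illustrative assumptions, and the stepsize-sensitivity this abstract describes is exactly the cost of hard-coding `tau` and `sigma` here.

```python
import numpy as np

def pdhg_tv1d(b, lam=1.0, iters=500):
    """Vanilla (non-adaptive) PDHG sketch for 1-D TV denoising:
        min_x 0.5 * ||x - b||^2 + lam * ||D x||_1,
    where D is the forward-difference operator. An adaptive PDHG
    variant would tune tau/sigma online; here they are fixed.
    """
    n = len(b)
    D = lambda x: np.diff(x)                                        # K
    Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))  # K^T
    tau = sigma = 0.4          # needs tau*sigma*||K||^2 < 1; ||K||^2 <= 4 here
    x = b.copy()
    xbar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(iters):
        y = np.clip(y + sigma * D(xbar), -lam, lam)       # dual prox: project onto [-lam, lam]
        x_new = (x - tau * Dt(y) + tau * b) / (1 + tau)   # primal prox of 0.5*||x - b||^2
        xbar = 2 * x_new - x                              # overrelaxation / extrapolation
        x = x_new
    return x
```

Both proximal sub-steps are closed-form, which is the "simple sub-steps" appeal of PDHG; the convergence speed, however, hinges on the `tau`/`sigma` choice satisfying the stepsize condition, motivating the adaptive tuning studied in the paper.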
A Preconditioner for a Primal-Dual Newton Conjugate Gradient Method for Compressed Sensing Problems
In this paper we are concerned with the solution of Compressed Sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. We extend the primal-dual Newton Conjugate Gradients (pdNCG) method in [11] for CS problems. We provide an inexpensive and provably effective preconditioning technique for linear systems using pdNCG. Numerical results are prese...
The Primal-Dual Hybrid Gradient Method for Semiconvex Splittings
This paper deals with the analysis of a recent reformulation of the primal-dual hybrid gradient method, which allows one to apply it to nonconvex regularizers. Particularly, it investigates variational problems for which the energy to be minimized can be written as G(u) + F (Ku), where G is convex, F is semiconvex, and K is a linear operator. We study the method and prove convergence in the cas...
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solutions for Nonconvex Distributed Optimization
In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solu...
Primal-Dual Subgradient Method for Huge-Scale Linear Conic Problems
In this paper we develop a primal-dual subgradient method for solving huge-scale Linear Conic Optimization Problems. Our main assumption is that the primal cone is formed as a direct product of many small-dimensional convex cones, and that the matrix A of corresponding linear operator is uniformly sparse. In this case, our method can approximate the primal-dual optimal solution with accuracy ε ...
Journal
Journal title: SIAM Journal on Control and Optimization
Year: 2022
ISSN: 0363-0129, 1095-7138
DOI: https://doi.org/10.1137/20m1371063